One of the most successful applications of Naïve Bayes has been within the field of Natural Language Processing (NLP). NLP is a field that has been closely related to machine learning, since many of its problems can be formulated as classification tasks. Usually, NLP problems have large amounts of tagged data in the form of text documents; this data can be used as a training dataset for machine learning algorithms. In this section, we will use Naïve Bayes for text classification: we will have a set of text documents with their corresponding categories, and we will train a Naïve Bayes algorithm to learn to predict the categories of new unseen instances. This simple task has many practical applications; probably the best known and most widely used one is spam filtering. Here we will try to classify newsgroup messages using a dataset that can be retrieved from within scikit-learn. This dataset consists of around 19,000 newsgroup messages from 20 different topics, ranging from politics and religion to sports and science.
Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter, and show the versions we will be using (in case you have problems running the notebooks).
In [1]:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
Now import the newsgroups dataset, and explore its structure and data (this could take some time, especially if sklearn has to download the 14 MB dataset from the Internet):
In [2]:
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
Let's explore the dataset structure:
In [3]:
news.keys()
Out[3]:
If we look at the properties of the dataset, we will find that we have the usual ones: DESCR, data, target, and target_names. The difference now is that data holds a list of text contents, instead of a numpy matrix:
In [4]:
print type(news.data), type(news.target), type(news.target_names)
print news.target_names
print len(news.data)
print len(news.target)
If you look at, say, the first instance, you will see the content of a newsgroup message, and you can get its corresponding category:
In [5]:
print news.data[0]
print news.target[0], news.target_names[news.target[0]]
Let's build the training and testing datasets:
In [6]:
SPLIT_PERC = 0.75
split_size = int(len(news.data)*SPLIT_PERC)
X_train = news.data[:split_size]
X_test = news.data[split_size:]
y_train = news.target[:split_size]
y_test = news.target[split_size:]
The following function will serve to perform and evaluate k-fold cross-validation:
In [7]:
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
def evaluate_cross_validation(clf, X, y, K):
    # create a k-fold cross-validation iterator of K folds
    cv = KFold(len(y), K, shuffle=True, random_state=0)
    # by default the score used is the one returned by the estimator's score method (accuracy)
    scores = cross_val_score(clf, X, y, cv=cv)
    print scores
    print ("Mean score: {0:.3f} (+/-{1:.3f})").format(
        np.mean(scores), sem(scores))
Our machine learning algorithms can work only on numeric data, so our next step will be to convert our text-based dataset into a numeric one. Currently we have only one feature, the text content of the message; we need some function that transforms a text into a meaningful set of numeric features. Intuitively, one could look at which words (or more precisely, tokens, including numbers and punctuation signs) are used in each of the text categories, and try to characterize each category with the frequency distribution of those words. The sklearn.feature_extraction.text module has some useful utilities to build numeric feature vectors from text documents.
If you look inside the sklearn.feature_extraction.text module, you will find three different classes that can transform text into numeric features: CountVectorizer, HashingVectorizer, and TfidfVectorizer. The difference between them resides in the calculations they perform to obtain the numeric features. CountVectorizer basically creates a dictionary of words from the text corpus. Then, each instance is converted to a vector of numeric features where each element is the count of the number of times a particular word appears in the document. HashingVectorizer, instead of constructing and maintaining the dictionary in memory, implements a hashing function that maps tokens to feature indexes, and then computes the counts as in CountVectorizer. TfidfVectorizer works like CountVectorizer, but with a more advanced calculation called Term Frequency Inverse Document Frequency (TF-IDF). This is a statistic for measuring the importance of a word in a document or corpus. Intuitively, it looks for words that are more frequent in the current document, compared with their frequency in the whole corpus of documents. You can see this as a way to normalize the results and avoid words that are too frequent, and thus not useful to characterize the instances.
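To get an intuition of what these vectorizers produce, the following minimal sketch applies CountVectorizer and TfidfVectorizer to a tiny, made-up corpus (the three sentences and variable names are just illustrative):
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
toy_corpus = [
    'the cat sat on the mat',
    'the dog sat on the log',
    'cats and dogs and logs',
]
count_vect = CountVectorizer()
# each row is a document, each column the count of one token of the dictionary
print count_vect.fit_transform(toy_corpus).toarray()
print count_vect.get_feature_names()
tfidf_vect = TfidfVectorizer()
# same structure, but tokens that appear in many documents get lower relative weights
print tfidf_vect.fit_transform(toy_corpus).toarray()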
We will create a Naïve Bayes classifier that is composed of a feature vectorizer and the actual Bayes classifier. We will use the MultinomialNB class from the sklearn.naive_bayes module.
In [8]:
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer
clf_1 = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB()),
])
clf_2 = Pipeline([
    ('vect', HashingVectorizer(non_negative=True)),
    ('clf', MultinomialNB()),
])
clf_3 = Pipeline([
    ('vect', TfidfVectorizer()),
    ('clf', MultinomialNB()),
])
In [9]:
clfs = [clf_1, clf_2, clf_3]
for clf in clfs:
    evaluate_cross_validation(clf, news.data, news.target, 5)
We will keep the TF-IDF vectorizer but use a different regular expression to perform tokenization. The default regular expression, ur"\b\w\w+\b", considers alphanumeric characters and the underscore. Perhaps also considering the hyphen and the dot could improve the tokenization, and begin considering tokens such as Wi-Fi and site.com. The new regular expression could be ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b". If you have doubts about how to define regular expressions, please refer to the Python re module documentation. Let's try our new classifier:
In [10]:
clf_4 = Pipeline([
    ('vect', TfidfVectorizer(
        token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
    )),
    ('clf', MultinomialNB()),
])
In [11]:
evaluate_cross_validation(clf_4, news.data, news.target, 5)
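To see the effect of the new token pattern, here is a minimal sketch that applies both regular expressions directly with the re module (the sample sentence is just illustrative):
import re
sample = "browse site.com using wi-fi on port 8080"
default_pattern = ur"\b\w\w+\b"
new_pattern = ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b"
# the default pattern breaks site.com and wi-fi into separate tokens
print re.findall(default_pattern, sample)
# the new pattern keeps them as single tokens (it also drops very short and purely numeric tokens)
print re.findall(new_pattern, sample)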
Another parameter that we can use is stop_words: this argument allows us to pass a list of words we do not want to take into account, such as words that are too frequent, or words we do not a priori expect to provide information about the particular topic. Let's try to improve performance by filtering out the stop words:
In [12]:
def get_stop_words():
    result = set()
    for line in open('data/stopwords_en.txt', 'r').readlines():
        result.add(line.strip())
    return result
In [13]:
stop_words = get_stop_words()
print stop_words
In [14]:
clf_5 = Pipeline([
    ('vect', TfidfVectorizer(
        stop_words=stop_words,
        token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
    )),
    ('clf', MultinomialNB()),
])
In [15]:
evaluate_cross_validation(clf_5, news.data, news.target, 5)
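If you do not have a stop word file at hand, scikit-learn's vectorizers also accept the string 'english' to use their built-in English stop word list. A minimal sketch of the same pipeline with that option (clf_5b is a hypothetical variant; its scores may differ slightly from clf_5):
clf_5b = Pipeline([
    ('vect', TfidfVectorizer(
        stop_words='english',
        token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
    )),
    ('clf', MultinomialNB()),
])
evaluate_cross_validation(clf_5b, news.data, news.target, 5)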
Let's try to improve results by adjusting the alpha parameter of the MultinomialNB classifier (alpha controls the additive, Laplace/Lidstone smoothing applied to the word counts):
In [16]:
clf_7 = Pipeline([
    ('vect', TfidfVectorizer(
        stop_words=stop_words,
        token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
    )),
    ('clf', MultinomialNB(alpha=0.01)),
])
In [17]:
evaluate_cross_validation(clf_7, news.data, news.target, 5)
The results got an important boost from 0.89 to 0.92; pretty good. At this point, we could continue doing trials by using different values of alpha or making further modifications to the vectorizer. In Chapter 4, Advanced Features, we will show you practical utilities to try many different configurations and keep the best one.
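For instance, a minimal sketch of such a trial, reusing our evaluate_cross_validation function to compare a few hand-picked alpha values (the particular values are just illustrative):
# try several smoothing values by hand, keeping the vectorizer settings of clf_7
for alpha in [1.0, 0.1, 0.01, 0.001]:
    clf = Pipeline([
        ('vect', TfidfVectorizer(
            stop_words=stop_words,
            token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
        )),
        ('clf', MultinomialNB(alpha=alpha)),
    ])
    print 'alpha =', alpha
    evaluate_cross_validation(clf, news.data, news.target, 5)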
If we decide that we have made enough improvements in our model, we are ready to evaluate its performance on the testing set.
In [18]:
from sklearn import metrics
def train_and_evaluate(clf, X_train, X_test, y_train, y_test):
    clf.fit(X_train, y_train)
    print "Accuracy on training set:"
    print clf.score(X_train, y_train)
    print "Accuracy on testing set:"
    print clf.score(X_test, y_test)
    y_pred = clf.predict(X_test)
    print "Classification Report:"
    print metrics.classification_report(y_test, y_pred)
    print "Confusion Matrix:"
    print metrics.confusion_matrix(y_test, y_pred)
In [19]:
train_and_evaluate(clf_7, X_train, X_test, y_train, y_test)
As we can see, we obtained very good results, and, as we would expect, the accuracy on the training set is noticeably better than on the testing set. We may expect an accuracy of around 0.91 on new, unseen instances.
If we look inside the vectorizer, we can see which tokens have been used to create our dictionary:
In [20]:
clf_7.named_steps['vect'].get_feature_names()
Out[20]:
In [21]:
print len(clf_7.named_steps['vect'].get_feature_names())
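The fitted vectorizer also exposes a vocabulary_ dictionary that maps each token to its column index in the feature matrix. A minimal sketch of a lookup (the tokens queried here are just examples and may not be in the vocabulary):
vect = clf_7.named_steps['vect']
# returns the column index of the token, or None if it was never seen during fitting
print vect.vocabulary_.get('windows')
print vect.vocabulary_.get('site.com')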